Bayesian optimization (BO) is a widely used sequential method for zeroth-order optimization of complex and expensive-to-compute black-box functions. Existing BO methods assume that the feedback from a function evaluation is available immediately or after a fixed delay. Such assumptions can be impractical in many real-life problems such as online recommendation, clinical trials, and hyperparameter tuning, where the feedback arrives only after a random delay. To benefit from parallelizing experiments in these problems, the learner needs to start new function evaluations without waiting for the delayed feedback. In this paper, we consider BO under stochastically delayed feedback. We propose algorithms with sub-linear regret guarantees that efficiently address the dilemma of selecting new function queries while waiting for randomly delayed feedback. Building on our results, we also make novel contributions to batch BO and contextual Gaussian process bandits. Experiments on synthetic and real-life datasets validate the performance of our algorithms.
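The abstract does not spell out the proposed algorithms, but a common way to keep a BO loop busy while earlier evaluations are still pending is to hallucinate surrogate values (e.g., the GP posterior mean) at the pending query points before choosing the next query. The following is a minimal illustrative sketch of that generic pattern in Python with scikit-learn's GaussianProcessRegressor, not the paper's actual method; all function names are hypothetical.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def ucb_query(gp, candidates, beta=2.0):
    """Pick the candidate maximising the GP-UCB acquisition score."""
    mu, sigma = gp.predict(candidates, return_std=True)
    return candidates[np.argmax(mu + beta * sigma)]

def delayed_feedback_bo_step(observed_X, observed_y, pending_X, candidates):
    """One BO step that does not wait for delayed feedback.

    Pending queries (submitted but not yet evaluated) are 'hallucinated'
    with the posterior mean so the next query avoids redundant points.
    """
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
    gp.fit(observed_X, observed_y)
    if len(pending_X) > 0:
        fake_y = gp.predict(pending_X)              # surrogate values for pending points
        gp.fit(np.vstack([observed_X, pending_X]),  # refit with hallucinated feedback
               np.concatenate([observed_y, fake_y]))
    return ucb_query(gp, candidates)
```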
Bayesian optimization (BO) has become popular for the sequential optimization of black-box functions. When BO is used to optimize a target function, we often have access to previous evaluations of potentially related functions. This raises the question of whether we can exploit these previous experiences to accelerate the current BO task through meta-learning (meta-BO), while ensuring robustness against potentially harmful dissimilar tasks that could derail the convergence of BO. This paper introduces two scalable and provably robust meta-BO algorithms: robust meta-Gaussian process upper confidence bound (RM-GP-UCB) and RM-GP-Thompson sampling (RM-GP-TS). We prove that both algorithms are asymptotically no-regret even when some or all previous tasks are dissimilar to the current task, and show that RM-GP-UCB enjoys better theoretical robustness than RM-GP-TS. We also exploit the theoretical guarantees to optimize the weights assigned to individual previous tasks through regret minimization via online learning, which diminishes the impact of dissimilar tasks and thus further enhances robustness. Empirical evaluations show that (a) RM-GP-UCB performs effectively and consistently across various applications, and (b) RM-GP-TS, despite being less robust than RM-GP-UCB both in theory and in practice, performs competitively in some scenarios with less dissimilar tasks and is more computationally efficient.
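As an illustration of the task-weighting idea described above, the sketch below combines the target-task GP-UCB score with predictions from previous-task GPs using per-task weights; in the paper the weights are optimized by online regret minimization, whereas here the function names, the mixing parameter `alpha`, and the weight values are hypothetical placeholders rather than the actual RM-GP-UCB rule.

```python
import numpy as np

def robust_meta_ucb(target_gp, meta_gps, task_weights, candidates, alpha=0.5, beta=2.0):
    """Illustrative meta-acquisition: a weighted combination of the target-task
    UCB score and previous-task GP predictions. task_weights (one per previous
    task, summing to 1) would be learned online so that dissimilar tasks
    receive small weight; alpha trades off target-task vs. meta-task signal."""
    mu_t, sigma_t = target_gp.predict(candidates, return_std=True)
    target_ucb = mu_t + beta * sigma_t
    meta_mean = sum(w * gp.predict(candidates) for w, gp in zip(task_weights, meta_gps))
    score = alpha * target_ucb + (1.0 - alpha) * meta_mean
    return candidates[np.argmax(score)]

def multiplicative_weight_update(task_weights, task_losses, eta=0.5):
    """One exponentiated-gradient step that down-weights previous tasks whose
    observed loss (a proxy for dissimilarity) is large."""
    new_w = task_weights * np.exp(-eta * task_losses)
    return new_w / new_w.sum()
```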
Measuring contributions is a classical problem in cooperative game theory, where the Shapley value is the most well-known solution concept. In this paper, we establish the convergence property of the Shapley value in parametric Bayesian learning games, where players perform Bayesian inference using their combined data and the posterior-prior KL divergence is used as the characteristic function. We show that, under certain regularity conditions, for any two players, the difference in their Shapley values converges to the corresponding difference in the Shapley values of a limiting game whose characteristic function is proportional to the log-determinant of the joint Fisher information. As an application, we present an online collaborative learning framework that is asymptotically Shapley-fair. Our result makes this possible without any costly computation of posterior-prior KL divergences; only a consistent estimator of the Fisher information is required. The effectiveness of our framework is demonstrated with experiments using real-world data.
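For intuition about the limiting game mentioned above, here is a small, assumption-laden sketch: exact Shapley values for a handful of players with a characteristic function equal to the log-determinant of the summed per-player Fisher information matrices (taking the empty coalition's value to be zero). The helper names are hypothetical, the proportionality constant is dropped, and the paper's regularity conditions are not modelled.

```python
import numpy as np
from itertools import combinations
from math import comb

def shapley_values(n_players, value_fn):
    """Exact Shapley values of a small coalition game given its characteristic function."""
    sv = np.zeros(n_players)
    players = range(n_players)
    for i in players:
        others = [p for p in players if p != i]
        for size in range(len(others) + 1):
            for coalition in combinations(others, size):
                weight = 1.0 / (n_players * comb(n_players - 1, size))
                sv[i] += weight * (value_fn(set(coalition) | {i}) - value_fn(set(coalition)))
    return sv

def fisher_logdet_value(fisher_infos):
    """Characteristic function of the limiting game: the log-determinant of the
    joint (summed) Fisher information of the coalition's players."""
    def value_fn(coalition):
        if not coalition:
            return 0.0
        joint = sum(fisher_infos[i] for i in coalition)
        return np.linalg.slogdet(joint)[1]
    return value_fn

# usage: shapley_values(len(fisher_infos), fisher_logdet_value(fisher_infos))
```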
This paper presents a novel collaborative generative modeling (CGM) framework that incentivizes collaboration among self-interested parties to contribute data to a pool for training a generative model (e.g., a GAN), from which synthetic data are drawn and distributed to the parties as rewards commensurate with their contributions. Distributing synthetic data as rewards (instead of trained models or money) offers task- and model-agnostic benefits for downstream learning tasks and is less likely to violate data privacy regulations. To realize the framework, we first propose a data valuation function using maximum mean discrepancy (MMD) that values data based on its quantity and quality in terms of its closeness to the true data distribution, and provide theoretical results guiding the choice of kernel in our MMD-based data valuation function. Then, we formulate the reward scheme as a linear optimization problem that, when solved, guarantees certain incentives such as fairness in the CGM framework. We devise a weighted sampling algorithm for generating the synthetic data to be distributed as rewards such that the value of a party's data combined with its assigned synthetic data matches the reward value prescribed by the reward scheme. We empirically show, using simulated and real-world datasets, that the parties' synthetic data rewards are commensurate with their contributions.
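To make the MMD-based valuation concrete, the sketch below computes a standard (biased) squared-MMD estimate with a Gaussian kernel and uses its negation as a toy data value; the actual CGM valuation function, its kernel choice, and the reference sample `reference_data` are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def gaussian_kernel(X, Y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets (2D arrays)."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * bandwidth ** 2))

def mmd_squared(X, Y, bandwidth=1.0):
    """Biased estimate of the squared MMD between samples X and Y."""
    kxx = gaussian_kernel(X, X, bandwidth).mean()
    kyy = gaussian_kernel(Y, Y, bandwidth).mean()
    kxy = gaussian_kernel(X, Y, bandwidth).mean()
    return kxx + kyy - 2.0 * kxy

def data_value(party_data, reference_data, bandwidth=1.0):
    """Toy valuation: data that is closer (in MMD) to the reference
    distribution is valued more; the paper's valuation also accounts for
    data quantity and a principled kernel choice."""
    return -mmd_squared(party_data, reference_data, bandwidth)
```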
The growing literature of Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL) to encourage multiple agents to federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing works on FRL fail to (I) provide a theoretical analysis of its convergence, and (II) account for random system failures and adversarial attacks. Towards this end, we propose the first FRL framework whose convergence is guaranteed and which is tolerant to fewer than half of the participating agents suffering random system failures or acting as adversarial attackers. We prove that the sample efficiency of the proposed framework is guaranteed to improve with the number of agents and is able to account for such potential failures or attacks. All theoretical results are empirically verified on various RL benchmark tasks.
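The abstract does not state the aggregation rule, but tolerance to fewer than half of the agents being faulty or adversarial is commonly obtained with a robust aggregation rule such as the coordinate-wise median of the agents' updates; the sketch below shows that generic pattern as an assumption-laden illustration, not necessarily the paper's mechanism.

```python
import numpy as np

def robust_aggregate(agent_gradients):
    """Coordinate-wise median of agents' policy-gradient estimates.
    With fewer than half of the agents faulty or adversarial, each
    coordinate's median is bracketed by honest values, which is one
    common route to the tolerance described in the abstract."""
    stacked = np.stack(agent_gradients)   # shape: (n_agents, n_params)
    return np.median(stacked, axis=0)

def federated_update(params, agent_gradients, lr=1e-2):
    """One server-side policy-gradient ascent step using the robust aggregate."""
    return params + lr * robust_aggregate(agent_gradients)
```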
Recently, neural architecture search (NAS) has been applied to automate the design of neural networks in real-world applications. A large number of algorithms have been developed to improve the search cost or the performance of the final selected architecture in NAS. Unfortunately, these NAS algorithms aim to select only a single well-performing architecture from their search spaces and thus overlook the capability of neural network ensembles (i.e., collections of neural networks with diverse architectures) to achieve improved performance over a single final selected architecture. To this end, we introduce a novel neural ensemble search algorithm, called neural ensemble search via Bayesian sampling (NESBS), to effectively and efficiently select well-performing ensembles of neural networks from the NAS search space. In our extensive experiments, the NESBS algorithm is shown to achieve improved performance over state-of-the-art NAS algorithms while incurring a comparable search cost, demonstrating the superiority of our NESBS algorithm in practice.
The Shapley value (SV) is adopted in various scenarios in machine learning (ML), including data valuation, agent valuation, and feature attribution, as it satisfies their fairness requirements. However, as exact SVs are infeasible to compute in practice, SV estimates are approximated instead. This approximation step raises an important question: do the SV estimates preserve the fairness guarantees of exact SVs? We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates. Thus, we generalise Shapley fairness to probably approximate Shapley fairness and propose the fidelity score, a metric that measures the variation of SV estimates and determines how likely the fairness guarantees are to hold. Our last theoretical contribution is a novel greedy active estimation (GAE) algorithm that maximises the lowest fidelity score and achieves a better fairness guarantee than the de facto Monte-Carlo estimation. We empirically verify that GAE outperforms several existing methods in guaranteeing fairness while remaining competitive in estimation accuracy in various ML scenarios using real-world datasets.
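As background for the comparison above, this is a minimal sketch of the de facto Monte-Carlo permutation estimator of Shapley values that GAE is measured against; the GAE algorithm itself is not specified in the abstract, so it is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def monte_carlo_shapley(n_players, value_fn, n_permutations=1000, seed=0):
    """De facto Monte-Carlo estimator of Shapley values: average the marginal
    contribution of each player over random permutations of the players."""
    rng = np.random.default_rng(seed)
    estimates = np.zeros(n_players)
    for _ in range(n_permutations):
        perm = rng.permutation(n_players)
        coalition, prev_value = set(), value_fn(set())
        for player in perm:
            coalition.add(player)
            new_value = value_fn(coalition)
            estimates[player] += new_value - prev_value
            prev_value = new_value
    return estimates / n_permutations
```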
Link prediction (LP) has been recognized as an important task in graph learning with wide practical applications. A typical application of LP is to retrieve the top-scoring neighbors for a given source node, e.g., in friend recommendation. These services require high inference scalability to find the top-scoring neighbors among many candidate nodes at low latency. Two popular decoders are mainly used to compute edge scores from node embeddings: the HadamardMLP decoder and the dot-product decoder. After theoretical and empirical analysis, we find that the HadamardMLP decoder is generally more effective for LP. However, HadamardMLP lacks scalability for retrieving top-scoring neighbors on large graphs, since, to the best of our knowledge, no algorithm exists to retrieve the top-scoring neighbors of a HadamardMLP decoder in sublinear time. To make HadamardMLP scalable, we propose the Flashlight algorithm to accelerate its top-scoring neighbor retrieval: an elastic algorithm that progressively applies approximate maximum inner product search (MIPS) techniques with adaptively adjusted query embeddings. Empirical results show that Flashlight improves the inference speed of LP by more than 100 times without sacrificing effectiveness. Our work paves the way for large-scale LP applications with the effective HadamardMLP decoder by greatly accelerating its inference.
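The abstract only outlines Flashlight at a high level, so the sketch below shows the general retrieval pattern it describes: over several rounds, issue (approximate) MIPS queries built from the source embedding and collect the retrieved candidates for final re-scoring with the full decoder. The query-adjustment rule `build_query`, the brute-force `exact_mips` stand-in for an approximate nearest-neighbor index, and all other names are illustrative assumptions.

```python
import numpy as np

def exact_mips(query, candidate_embeddings, k):
    """Placeholder for an approximate MIPS index; here a brute-force
    top-k by inner product for clarity."""
    scores = candidate_embeddings @ query
    top = np.argpartition(-scores, k)[:k]
    return top[np.argsort(-scores[top])]

def flashlight_style_retrieval(build_query, source_embedding, candidate_embeddings,
                               k, n_rounds=3):
    """Illustrative progressive retrieval: each round issues a MIPS query built
    from the source embedding (adaptively adjusted by `build_query`). The union
    of retrieved candidate indices would then be re-scored with the full
    HadamardMLP decoder; the real Flashlight query-adjustment rule is not
    specified in the abstract."""
    retrieved = set()
    for round_idx in range(n_rounds):
        query = build_query(source_embedding, round_idx, retrieved)
        retrieved.update(exact_mips(query, candidate_embeddings, k).tolist())
    return sorted(retrieved)
```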
Despite recent advances in generative modeling for text-to-speech synthesis, these models do not yet have the fine-grained adjustability of pitch-conditioned deterministic models such as FastPitch and FastSpeech2. Pitch information is not only low-dimensional but also discontinuous, which makes it particularly difficult to model in a generative setting. Our work explores several techniques for handling these problems in the context of normalizing flow models. We also find that this problem is well suited to neural conditional flows, a highly expressive alternative to the more common affine coupling mechanism in normalizing flows.
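For readers unfamiliar with the baseline mechanism mentioned above, here is a minimal NumPy sketch of a standard affine coupling transform and its exact inverse; the conditioner networks `scale_net` and `shift_net` are assumed to be given, and this is generic background rather than the paper's model.

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    """Standard affine coupling: split the input, transform one half with a
    scale and shift predicted from the other half. The log-determinant of the
    Jacobian is the sum of the log-scales, so the transform is invertible and
    cheap to train with maximum likelihood."""
    x1, x2 = np.split(x, 2, axis=-1)
    log_s, t = scale_net(x1), shift_net(x1)   # conditioner networks (assumed given)
    y2 = x2 * np.exp(log_s) + t
    y = np.concatenate([x1, y2], axis=-1)
    log_det = log_s.sum(axis=-1)
    return y, log_det

def affine_coupling_inverse(y, scale_net, shift_net):
    """Exact inverse of the coupling transform."""
    y1, y2 = np.split(y, 2, axis=-1)
    log_s, t = scale_net(y1), shift_net(y1)
    x2 = (y2 - t) * np.exp(-log_s)
    return np.concatenate([y1, x2], axis=-1)
```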
Adversarial imitation learning (AIL) has become a popular alternative to supervised imitation learning that reduces the distribution shift suffered by the latter. However, AIL requires effective exploration during an online reinforcement learning phase. In this work, we show that the standard, naive approach to exploration can manifest as a suboptimal local maximum if a policy learned with AIL sufficiently matches the expert distribution without fully learning the desired task. This can be particularly catastrophic for manipulation tasks, where the difference between an expert and a non-expert state-action pair is often subtle. We present Learning from Guided Play (LfGP), a framework in which we leverage expert demonstrations of multiple exploratory, auxiliary tasks in addition to a main task. The addition of these auxiliary tasks forces the agent to explore states and actions that standard AIL may learn to ignore. Additionally, this particular formulation allows for the reusability of expert data between main tasks. Our experimental results in a challenging multitask robotic manipulation domain indicate that LfGP significantly outperforms both AIL and behaviour cloning, while also being more expert sample efficient than these baselines. To explain this performance gap, we provide further analysis of a toy problem that highlights the coupling between a local maximum and poor exploration, and also visualize the differences between the learned models from AIL and LfGP.
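As a rough illustration of the idea, the sketch below pairs a standard AIL-style reward with a scheduler that forces rollouts on auxiliary tasks alongside the main task; the scheduling rule, `p_aux`, and the per-task discriminators are hypothetical and only meant to convey the exploration mechanism described above, not the LfGP algorithm itself.

```python
import numpy as np

def ail_reward(discriminator, state, action):
    """Standard adversarial imitation reward: higher when the discriminator
    believes (state, action) came from the expert (output in (0, 1))."""
    d = discriminator(state, action)
    return -np.log(1.0 - d + 1e-8)

def sample_task(main_task, auxiliary_tasks, p_aux=0.5, rng=None):
    """Illustrative scheduler: with probability p_aux, collect experience on a
    random auxiliary task (each with its own expert data and discriminator),
    otherwise on the main task. Forcing auxiliary-task rollouts pushes the
    agent into states that plain AIL on the main task may learn to ignore."""
    rng = rng or np.random.default_rng()
    if auxiliary_tasks and rng.random() < p_aux:
        return auxiliary_tasks[rng.integers(len(auxiliary_tasks))]
    return main_task
```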